512 research outputs found
Using MicroPET Imaging in Quantitative Verification of Acupuncture Effect in Ischemia Stroke Treatment
While acupuncture has survived several thousand years of evolving medical practice, its mechanism remains a mystery from the viewpoint of modern medicine. Our goal in this paper is to quantitatively understand the function of acupuncture in ischemic stroke treatment. We carried out a comparative study using the Sprague Dawley rat model, inducing focal cerebral ischemia with the middle cerebral artery occlusion (MCAO) procedure. For each rat in the real acupuncture group (n = 40), the sham acupoint treatment group (n = 54), and the blank control group (n = 16), we acquired 3-D FDG-microPET images at baseline, after MCAO, and after treatment (i.e., real acupuncture, sham acupoint treatment, or rest, according to group assignment). After verifying that the injured area lay in the right hemisphere of the cerebral cortex using magnetic resonance imaging (MRI) and triphenyl tetrazolium chloride (TTC) staining, we directly compared the glucose metabolism in the right hemisphere of each rat. We carried out t-tests and permutation tests on the image data. Both tests demonstrated that acupuncture had a more positive effect than non-acupoint stimulation and blank control (P < 0.025) in increasing the glucose metabolic level in the stroke-injured area of the brain, while there was no statistically significant difference between non-acupoint stimulation and blank control (P > 0.15). The immediate positive effect of acupuncture over sham acupoint treatment and blank control was verified in our experiments. The long-term benefit of acupuncture needs further study.
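The permutation test mentioned in this abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the group sizes match the abstract (n = 40 acupuncture, n = 54 sham), but the per-rat glucose-uptake values are synthetic and the effect size is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_perm=10_000, rng=rng):
    """One-sided permutation test for mean(a) > mean(b).

    Repeatedly shuffles the pooled group labels and returns the
    fraction of shufflings whose mean difference is at least as
    large as the observed one (with the standard +1 correction).
    """
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[: len(a)].mean() - perm[len(a):].mean()
        if diff >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical per-rat changes in glucose uptake (arbitrary units);
# the true values are not given in the abstract.
acupuncture = rng.normal(1.0, 0.5, size=40)  # real acupuncture group
sham = rng.normal(0.2, 0.5, size=54)         # sham acupoint group
p = permutation_test(acupuncture, sham)
```

Because the test builds the null distribution from the data itself, it makes no normality assumption, which is why it is often paired with a t-test as a robustness check.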
Prototype-Driven and Multi-Expert Integrated Multi-Modal MR Brain Tumor Image Segmentation
For multi-modal magnetic resonance (MR) brain tumor image segmentation,
current methods usually extract discriminative features directly from the
input images to determine and localize tumor sub-region categories. However,
the information aliasing caused by the mutual inclusion of tumor sub-regions
is often ignored. Moreover, existing methods usually make no tailored effort
to highlight the features of individual tumor sub-regions. To this end, a
multi-modal MR brain tumor segmentation method driven by tumor prototypes and
integrating multiple experts is proposed. It can highlight the features of
each tumor sub-region under the guidance of tumor prototypes.
Specifically, to obtain the prototypes with complete information, we propose a
mutual transmission mechanism to transfer different modal features to each
other to address the issues raised by insufficient information on single-modal
features. Furthermore, we devise a prototype-driven feature representation and
fusion method with the learned prototypes, which implants the prototypes into
tumor features and generates corresponding activation maps. With the activation
maps, the sub-region features consistent with the prototype category can be
highlighted. A key information enhancement and fusion strategy with
multi-expert integration is designed to further improve the segmentation
performance. The strategy can integrate the features from different layers of
the extra feature extraction network and the features highlighted by the
prototypes. Experimental results on three brain tumor segmentation challenge
datasets demonstrate the superiority of the proposed method.
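The prototype-driven activation maps described above can be illustrated with a small sketch. This is an assumption-laden toy version, not the paper's implementation: it takes the activation map to be the cosine similarity between a learned per-sub-region prototype vector and each spatial location of the feature map, then uses that map to highlight prototype-consistent features.

```python
import numpy as np

def prototype_activation(features, prototype, eps=1e-8):
    """Cosine-similarity activation map between a feature map and a
    class prototype vector.

    features:  (C, H, W) feature map
    prototype: (C,) prototype vector for one tumor sub-region
    Returns an (H, W) map in [-1, 1]; high values mark locations
    whose features align with the prototype.
    """
    C, H, W = features.shape
    f = features.reshape(C, -1)                       # (C, H*W)
    f_norm = f / (np.linalg.norm(f, axis=0) + eps)    # normalize each location
    p_norm = prototype / (np.linalg.norm(prototype) + eps)
    sim = p_norm @ f_norm                             # (H*W,)
    return sim.reshape(H, W)

def highlight(features, prototype):
    """Reweight the feature map by the (clipped) activation map so that
    sub-region features consistent with the prototype are emphasized."""
    act = prototype_activation(features, prototype)
    return features * np.clip(act, 0, None)[None]
```

One such map per sub-region prototype would give the per-category highlighting the abstract describes; how the prototypes themselves are learned (via the mutual transmission mechanism) is not shown here.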
Application and Development of CRISPR/Cas9 Technology in Pig Research
Pigs provide valuable meat sources, disease models, and research materials for humans. However, traditional breeding methods no longer meet the evolving needs of pig production. More recently, advanced biotechnologies such as somatic cell nuclear transfer (SCNT) and genome editing have enabled researchers to manipulate genomic DNA directly. Such methods have greatly advanced pig research. Three gene-editing platforms, ZFNs, TALENs, and CRISPR/Cas, are becoming increasingly prevalent in life science research, with CRISPR/Cas9 now the most widely used. CRISPR/Cas9, part of a prokaryotic defense mechanism against viral infection, has been developed into a powerful and efficient genome-editing tool that can introduce modifications into eukaryotic genomes in a predictable manner across a range of animals, including insects, amphibians, fish, and mammals. Given characteristics superior to those of other tailored endonuclease systems, CRISPR/Cas9 is well suited to pig-related studies. In this review, we briefly discuss the historical perspectives of CRISPR/Cas9 technology and highlight applications and developments of CRISPR/Cas9-based methods in pig research. We also review the options for delivering genome-editing components, the merits and drawbacks of using CRISPR/Cas9 for pig research, and future prospects.
Identification and characterization of high affinity antisense PNAs for the human unr (upstream of N-ras) mRNA which is uniquely overexpressed in MCF-7 breast cancer cells
We have recently shown that an MCF-7 tumor can be imaged in a mouse by PET with (64)Cu-labeled peptide nucleic acids (PNAs) tethered to the permeation peptide Lys(4) that recognize the uniquely overexpressed and very abundant upstream of N-ras (unr) mRNA expressed in these cells. Herein we describe how the high-affinity antisense PNAs to the unr mRNA were identified and characterized. First, antisense binding sites on the unr mRNA were mapped by a reverse transcriptase random oligonucleotide library (RT-ROL) method that we have improved, and by a serial analysis of antisense binding sites (SAABS) method that we have developed, which is similar to another recently described method. The relative binding affinities of oligodeoxynucleotides (ODNs) complementary to the antisense binding sites were then qualitatively ranked by a new Dynabead-based dot blot assay. Dissociation constants for a subset of the ODNs were determined by a new Dynabead-based solution assay and were found to be about 300 pM for the best binders in 1 M salt. PNAs corresponding to the ODNs with the highest affinities were synthesized with an N-terminal CysTyr and a C-terminal Lys(4) sequence. Dissociation constants of these hybrid PNAs, determined by the Dynabead-based solution assay, were about 10 pM for the highest-affinity binders.
Adversarial Self-Attack Defense and Spatial-Temporal Relation Mining for Visible-Infrared Video Person Re-Identification
In visible-infrared video person re-identification (re-ID), extracting
features unaffected by changes in complex scenes (such as modality, camera
view, pedestrian pose, background, etc.), together with mining and utilizing
motion information, is the key to solving cross-modal pedestrian identity
matching.
To this end, the paper proposes a new visible-infrared video person re-ID
method from a novel perspective, i.e., adversarial self-attack defense and
spatial-temporal relation mining. In this work, the changes of views, posture,
background and modal discrepancy are considered as the main factors that cause
the perturbations of person identity features. Such interference information
contained in the training samples is used as an adversarial perturbation. It
performs adversarial attacks on the re-ID model during the training to make the
model more robust to these unfavorable factors. The attack from the adversarial
perturbation is introduced by activating the interference information contained
in the input samples without generating adversarial samples, and it can be thus
called adversarial self-attack. This design allows adversarial attack and
defense to be integrated into one framework. This paper further proposes a
spatial-temporal information-guided feature representation network to use the
information in video sequences. The network cannot only extract the information
contained in the video-frame sequences but also use the relation of the local
information in space to guide the network to extract more robust features. The
proposed method exhibits compelling performance on large-scale cross-modality
video datasets. The source code of the proposed method will be released at
https://github.com/lhf12278/xxx.
Comment: 11 pages, 8 figures
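The "adversarial self-attack" idea, perturbing a sample by amplifying interference information it already contains rather than synthesizing a separate adversarial sample, can be caricatured in a few lines. Everything here is an assumption for illustration: the `interference_dir` (e.g., an estimated modality- or view-discrepancy direction in feature space), the projection-and-amplify rule, and the `strength` parameter are all hypothetical, not the paper's formulation.

```python
import numpy as np

def self_attack(features, interference_dir, strength=0.5):
    """Toy adversarial self-attack.

    Projects each feature vector onto an assumed interference direction
    (how much modality/view/pose nuisance it already carries) and
    amplifies that component in place, so no external adversarial
    sample is generated.

    features:         (N, D) batch of identity feature vectors
    interference_dir: (D,) assumed interference direction
    """
    d = interference_dir / np.linalg.norm(interference_dir)
    coeff = features @ d                     # interference already in each sample
    return features + strength * coeff[..., None] * d
```

During training, feeding such self-perturbed features back through the re-ID loss would play the role of the attack, with the model's continued identity matching acting as the defense, which is how attack and defense end up in one framework.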